    Cognitive loading affects motor awareness and movement kinematics but not locomotor trajectories during goal-directed walking in a virtual reality environment.

    The primary purpose of this study was to investigate the effects of cognitive loading on movement kinematics and trajectory formation during goal-directed walking in a virtual reality (VR) environment. The secondary objective was to measure how participants corrected their trajectories for perturbed feedback and how participants' awareness of such perturbations changed under cognitive loading. We asked 14 healthy young adults to walk towards four different target locations in a VR environment while their movements were tracked and played back in real time on a large projection screen. In 75% of all trials we introduced angular deviations of ±5° to ±30° between the veridical walking trajectory and the visual feedback. Participants performed a second experimental block under cognitive load (serial-7 subtraction, counterbalanced across participants). We measured walking kinematics (joint angles, velocity profiles) and motor performance (end-point compensation, trajectory deviations). Motor awareness was determined by asking participants to rate the veracity of the feedback after every trial. In line with previous findings in natural settings, participants displayed stereotypical walking trajectories in a VR environment. Our results extend these findings by demonstrating that taxing cognitive resources did not affect trajectory formation and deviations, although it interfered with participants' movement kinematics, in particular walking velocity. Additionally, we report that motor awareness was selectively impaired by the secondary task in trials with high perceptual uncertainty. Compared with data on eye and arm movements, our findings lend support to the hypothesis that the central nervous system (CNS) uses common mechanisms to govern goal-directed movements, including locomotion. We discuss our results with respect to the use of VR methods in gait control and rehabilitation.
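
    The feedback manipulation amounts to rotating each tracked position about the trial's start point before rendering it on the screen. A minimal sketch of such a perturbation, assuming a 2-D overhead coordinate frame (function and variable names are illustrative, not taken from the study):

```python
import numpy as np

def perturb_feedback(position, origin, deviation_deg):
    """Rotate a tracked 2-D position about the walk's origin so the
    displayed trajectory deviates from the real one by a fixed angle."""
    theta = np.radians(deviation_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return origin + rot @ (position - origin)

# Example: participant 2 m into the walk, +10 deg feedback deviation.
real_pos = np.array([0.5, 2.0])
start = np.array([0.0, 0.0])
shown_pos = perturb_feedback(real_pos, start, 10.0)
```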

    Parametric study of EEG sensitivity to phase noise during face processing

    Background: The present paper examines the visual processing speed of complex objects, here faces, by mapping the relationship between object physical properties and single-trial brain responses. Measuring visual processing speed is challenging because uncontrolled physical differences that co-vary with object categories might affect brain measurements, thus biasing our speed estimates. Recently, we demonstrated that early event-related potential (ERP) differences between faces and objects are preserved even when images differ only in phase information and amplitude spectra are equated across image categories. Here, we use a parametric design to study how early ERPs to faces are shaped by phase information. Subjects performed a two-alternative forced-choice discrimination between two faces (Experiment 1) or textures (two control experiments). All stimuli had the same amplitude spectrum and were presented at 11 phase noise levels, varying from 0% to 100% in 10% increments, using a linear phase interpolation technique. Single-trial ERP data from each subject were analysed using a multiple linear regression model. Results: Our results show that sensitivity to phase noise in faces emerges progressively in a short time window between the P1 and the N170 ERP visual components. The sensitivity to phase noise starts at about 120–130 ms after stimulus onset and continues for another 25–40 ms. This result was robust both within and across subjects. A control experiment using pink noise textures, which had the same second-order statistics as the faces used in Experiment 1, demonstrated that the sensitivity to phase noise observed for faces cannot be explained by the presence of global image structure alone. A second control experiment used wavelet textures that were matched to the face stimuli in terms of second- and higher-order image statistics. Results from this experiment suggest that higher-order statistics of faces are necessary but not sufficient to obtain the sensitivity to phase noise function observed in response to faces. Conclusion: Our results constitute the first quantitative assessment of the time course of phase information processing by the human visual brain. We interpret our results in a framework that focuses on image statistics and single-trial analyses.
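
    The stimulus manipulation keeps each image's amplitude spectrum fixed and linearly blends its Fourier phase with random phase. A minimal sketch in NumPy, assuming a simple linear blend (the published interpolation technique may differ in detail, e.g. in how phase circularity is handled):

```python
import numpy as np

def add_phase_noise(image, noise_level, rng):
    """Blend the image's Fourier phase with uniform random phase while
    keeping the amplitude spectrum fixed (0.0 = intact, 1.0 = pure noise)."""
    spectrum = np.fft.fft2(image)
    amplitude, phase = np.abs(spectrum), np.angle(spectrum)
    random_phase = rng.uniform(-np.pi, np.pi, size=image.shape)
    mixed_phase = (1.0 - noise_level) * phase + noise_level * random_phase
    return np.real(np.fft.ifft2(amplitude * np.exp(1j * mixed_phase)))

rng = np.random.default_rng(0)
face = rng.standard_normal((256, 256))   # placeholder for a face image
levels = np.arange(0.0, 1.01, 0.1)       # 11 levels: 0% to 100% in 10% steps
stimuli = [add_phase_noise(face, w, rng) for w in levels]
```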

    Optimising the glaucoma signal/noise ratio by mapping changes in spatial summation with area-modulated perimetric stimuli

    Identification of glaucomatous damage and progression by perimetry is limited by measurement and response variability. This study tested the hypothesis that the glaucoma damage signal/noise ratio is greater with stimuli varying in area, either solely or simultaneously with contrast, than with conventional stimuli varying in contrast only (Goldmann III, GIII). Thirty glaucoma patients and 20 age-similar healthy controls were tested with the Method of Constant Stimuli (MOCS). One stimulus modulated in area (A), one modulated in contrast within Ricco's area (C_R), one modulated in both area and contrast simultaneously (AC), and the reference stimulus was a GIII, modulating in contrast. Stimuli were presented on a common platform with a common scale (energy). A three-stage protocol minimised artefactual MOCS slope bias that can occur due to differences in psychometric function sampling between conditions. Threshold difference from age-matched normal (total deviation), response variability, and signal/noise ratio were compared between stimuli. Total deviation was greater, and response variability less dependent on defect depth, with the A, AC, and C_R stimuli than with the GIII. Both the A and AC stimuli showed a significantly greater signal/noise ratio than the GIII, indicating that area-modulated stimuli offer benefits over the GIII for identifying early glaucoma and measuring progression.
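
    With MOCS, each stimulus is presented repeatedly at a set of fixed levels and a psychometric function is fitted to the proportion of "seen" responses; the function's midpoint gives the threshold and its slope reflects response variability. A minimal sketch with hypothetical data (the study's three-stage slope-bias protocol is not reproduced here):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(log_energy, threshold, spread):
    """Cumulative-Gaussian probability of seeing vs log stimulus energy."""
    return norm.cdf((log_energy - threshold) / spread)

# Hypothetical MOCS data: proportion of 'seen' responses per energy level.
log_energy = np.array([-0.4, -0.2, 0.0, 0.2, 0.4])
p_seen = np.array([0.05, 0.20, 0.55, 0.85, 0.98])

(threshold, spread), _ = curve_fit(psychometric, log_energy, p_seen,
                                   p0=[0.0, 0.2])
# 'threshold' estimates sensitivity; 'spread' (slope) tracks response variability.
```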

    Vestibular Facilitation of Optic Flow Parsing

    Simultaneous object motion and self-motion give rise to complex patterns of retinal image motion. In order to estimate object motion accurately, the brain must parse this complex retinal motion into self-motion and object motion components. Although this computational problem can be solved, in principle, through purely visual mechanisms, extra-retinal information that arises from the vestibular system during self-motion may also play an important role. Here we investigate whether combining vestibular and visual self-motion information improves the precision of object motion estimates. Subjects were asked to discriminate the direction of object motion in the presence of simultaneous self-motion, depicted either by visual cues alone (i.e. optic flow) or by combined visual/vestibular stimuli. We report a small but significant improvement in object motion discrimination thresholds with the addition of vestibular cues. This improvement was greatest for eccentric heading directions and negligible for forward movement, a finding that could reflect increased relative reliability of vestibular versus visual cues for eccentric heading directions. Overall, these results are consistent with the hypothesis that vestibular inputs can help parse retinal image motion into self-motion and object motion components.
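
    Under the standard optimal cue-combination account, the predicted benefit of adding a vestibular cue follows directly from the single-cue reliabilities. A minimal sketch of that prediction (an ideal-observer calculation offered for illustration, not the paper's analysis; the example thresholds are invented):

```python
def combined_sigma(sigma_visual, sigma_vestibular):
    """Optimal (reliability-weighted) combination of two independent cues:
    1/sigma_comb^2 = 1/sigma_vis^2 + 1/sigma_vest^2."""
    return (sigma_visual ** 2 * sigma_vestibular ** 2 /
            (sigma_visual ** 2 + sigma_vestibular ** 2)) ** 0.5

# A noisy vestibular cue predicts only a small benefit over vision alone,
# e.g. 2.0 deg (visual) + 6.0 deg (vestibular) -> ~1.90 deg combined.
print(combined_sigma(2.0, 6.0))
```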

    A New Perceptual Bias Reveals Suboptimal Population Decoding of Sensory Responses

    Several studies have reported optimal population decoding of sensory responses in two-alternative visual discrimination tasks. Such decoding involves integrating noisy neural responses into a more reliable representation of the likelihood that the stimuli under consideration evoked the observed responses. Importantly, an ideal observer must be able to evaluate likelihood with high precision and consider only the likelihood of the two relevant stimuli involved in the discrimination task. We report a new perceptual bias suggesting that observers read out the likelihood representation with remarkably low precision when discriminating grating spatial frequencies. Using spectrally filtered noise, we induced an asymmetry in the likelihood function of spatial frequency. This manipulation mainly affects the likelihood of spatial frequencies that are irrelevant to the task at hand. Nevertheless, we find a significant shift in perceived grating frequency, indicating that observers evaluate likelihoods of a broad range of irrelevant frequencies and discard prior knowledge of stimulus alternatives when performing two-alternative discrimination.
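
    The contrast at stake is between a decoder that evaluates likelihood over the full stimulus range and an ideal observer that compares only the two task-relevant alternatives. A minimal simulation of the broad readout, assuming a population of independent Poisson neurons with Gaussian tuning (all parameters are illustrative, not fitted to the data):

```python
import numpy as np

rng = np.random.default_rng(1)
freqs = np.linspace(0.5, 8.0, 60)   # candidate spatial frequencies (c/deg)
prefs = np.linspace(0.5, 8.0, 40)   # preferred frequencies of the population

def mean_rates(f, gain=20.0, width=1.0):
    """Mean firing rates of Gaussian-tuned neurons for a grating of frequency f."""
    return gain * np.exp(-0.5 * ((prefs - f) / width) ** 2)

spikes = rng.poisson(mean_rates(3.0))   # one noisy trial, true frequency 3 c/deg

# Poisson log-likelihood over the FULL range of candidate frequencies
# (constant log(n!) terms dropped); an ideal two-alternative observer
# would evaluate only the two task-relevant frequencies instead.
loglik = np.array([np.sum(spikes * np.log(mean_rates(f)) - mean_rates(f))
                   for f in freqs])
estimate = freqs[np.argmax(loglik)]
```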

    Cortical Contributions to Saccadic Suppression

    The stability of visual perception is partly maintained by saccadic suppression: the selective reduction of visual sensitivity that accompanies rapid eye movements. The neural mechanisms responsible for this reduced perisaccadic visibility remain unknown, but the Lateral Geniculate Nucleus (LGN) has been proposed as a likely site. Our data show, however, that the saccadic suppression of a target flashed in the right visual hemifield increased with an increase in background luminance in the left visual hemifield. Because each LGN only receives retinal input from a single hemifield, this hemifield interaction cannot be explained solely on the basis of neural mechanisms operating in the LGN. Instead, this suggests that saccadic suppression must involve processing in higher-level cortical areas that have access to a considerable part of the ipsilateral hemifield.

    Does training with amplitude modulated tones affect tone-vocoded speech perception?

    Temporal-envelope cues are essential for successful speech perception. We asked here whether training on stimuli containing temporal-envelope cues without speech content can improve the perception of spectrally degraded (vocoded) speech, in which the temporal envelope (but not the temporal fine structure) is mainly preserved. Two groups of listeners were trained on different amplitude-modulation (AM) based tasks, either AM detection or AM-rate discrimination (21 blocks of 60 trials over two days, 1260 trials in total; modulation frequencies: 4, 8, and 16 Hz), while an additional control group did not undertake any training. Consonant identification in vocoded vowel-consonant-vowel stimuli was tested before and after training on the AM tasks (or at an equivalent time interval for the control group). Following training, only the trained groups showed a significant improvement in the perception of vocoded speech, but the improvement did not significantly differ from that observed for controls. Thus, we do not find convincing evidence that this amount of training with temporal-envelope cues without speech content provides a significant benefit for vocoded speech intelligibility. Alternative training regimens using vocoded speech along the linguistic hierarchy should be explored.
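
    The training stimuli can be thought of as sinusoidally amplitude-modulated tones. A minimal sketch of their construction, assuming a 1 kHz carrier and full modulation depth (neither is specified by the abstract):

```python
import numpy as np

def am_tone(fm, fc=1000.0, depth=1.0, dur=1.0, fs=44100):
    """Sinusoidally amplitude-modulated tone:
    s(t) = (1 + depth * sin(2*pi*fm*t)) * sin(2*pi*fc*t), scaled to +/-1."""
    t = np.arange(int(dur * fs)) / fs
    s = (1.0 + depth * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)
    return s / np.max(np.abs(s))

# Modulation rates used during training: 4, 8 and 16 Hz.
stimuli = {fm: am_tone(fm) for fm in (4.0, 8.0, 16.0)}
```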

    Dimension-specific attention directs learning and listening on auditory training tasks

    The relative contributions of bottom-up versus top-down sensory inputs to auditory learning are not well established. In our experiment, listeners were instructed to perform either a frequency discrimination (FD) task ("FD-train group") or an intensity discrimination (ID) task ("ID-train group") during training on a set of physically identical tones that were impossible to discriminate consistently above chance, allowing us to vary top-down attention whilst keeping bottom-up inputs fixed. A third, control group did not receive any training. Only the FD-train group improved on an FD probe following training, whereas all groups improved on ID following training. However, only the ID-train group also showed changes in performance accuracy as a function of interval with training on the ID task. These findings suggest that top-down, dimension-specific attention can direct auditory learning, even when this learning is not reflected in conventional performance measures of threshold change.